Empirical evaluation of the improved Rprop learning algorithms

Authors

  • Christian Igel
  • Michael Hüsken
Abstract

The Rprop algorithm proposed by Riedmiller and Braun is one of the best performing first-order learning methods for neural networks. We discuss modifications of this algorithm that improve its learning speed. The new optimization methods are empirically compared to the existing Rprop variants, the conjugate gradient method, Quickprop, and the BFGS algorithm on a set of neural network benchmark problems. The improved Rprop outperforms the other methods; only the BFGS performs better in the later stages of learning on some of the test problems. For the analysis of the local search behavior, we compare the Rprop algorithms on general hyperparabolic error landscapes, where the new variants confirm their improvement.
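The core of all Rprop variants is a sign-based update with per-weight adaptive step sizes; the improved variant iRprop⁻ discussed in the paper additionally discards the stored gradient when its sign flips, so that no misleading step is taken in that dimension. The following is a minimal sketch of that rule, not the authors' reference implementation; the function name, hyperparameter defaults, and the toy quadratic objective are illustrative choices.

```python
import numpy as np

def irprop_minus(grad_fn, w, n_iter=100,
                 eta_plus=1.2, eta_minus=0.5,
                 step_init=0.1, step_min=1e-6, step_max=50.0):
    """Sketch of the iRprop- update rule (names/defaults illustrative).

    Like standard Rprop, each weight has its own step size that grows
    while the partial derivative keeps its sign and shrinks when the
    sign flips; iRprop- additionally zeroes the stored derivative on a
    sign flip, skipping the update in that dimension for one step.
    """
    step = np.full_like(w, step_init)
    prev_grad = np.zeros_like(w)
    for _ in range(n_iter):
        grad = grad_fn(w)
        change = grad * prev_grad
        # Same sign: accelerate by growing the step size.
        step = np.where(change > 0, np.minimum(step * eta_plus, step_max), step)
        # Sign flip: we overshot a minimum, so shrink the step size ...
        step = np.where(change < 0, np.maximum(step * eta_minus, step_min), step)
        # ... and (the iRprop- modification) forget the gradient there.
        grad = np.where(change < 0, 0.0, grad)
        # Move each weight against the sign of its partial derivative.
        w = w - np.sign(grad) * step
        prev_grad = grad
    return w

# Toy example: minimise f(w) = sum(w_i^2), whose gradient is 2w.
w_opt = irprop_minus(lambda w: 2 * w, np.array([3.0, -4.0]))
```

On this separable quadratic the step sizes first grow, then halve at each overshoot, so the iterates settle close to the minimum at the origin; the paper's hyperparabolic error landscapes generalise exactly this kind of test surface.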


Similar articles

A New Learning Rates Adaptation Strategy for the Resilient Propagation Algorithm

In this paper we propose an Rprop modification that builds on a mathematical framework for the convergence analysis to equip Rprop with a learning rates adaptation strategy that ensures the search direction is a descent one. Our analysis is supported by experiments illustrating how the new learning rates adaptation strategy works in the test cases to ameliorate the convergence behaviour of the ...


Sign-based learning schemes for pattern classification

This paper introduces a new class of sign-based training algorithms for neural networks that combine the sign-based updates of the Rprop algorithm with the composite nonlinear Jacobi method. The theoretical foundations of the class are described and a heuristic Rprop-based Jacobi algorithm is empirically investigated through simulation experiments in benchmark pattern classification problems. N...


Adapting Resilient Propagation for Deep Learning

The Resilient Propagation (Rprop) algorithm has been very popular for backpropagation training of multilayer feed-forward neural networks in various applications. The standard Rprop however encounters difficulties in the context of deep neural networks as typically happens with gradient-based learning algorithms. In this paper, we propose a modification of the Rprop that combines standard Rprop...


New globally convergent training scheme based on the resilient propagation algorithm

In this paper, a new globally convergent modification of the Resilient Propagation-Rprop algorithm is presented. This new addition to the Rprop family of methods builds on a mathematical framework for the convergence analysis that ensures that the adaptive local learning rates of the Rprop’s schedule generate a descent search direction at each iteration. Simulation results in six problems of th...


Experimental Study on the Precision Requirements of Rbf, Rprop and Bptt Training

Most neurocomputer architectures support only fixed point arithmetic, which allows a higher degree of VLSI integration but limits the range and precision of all variables. Up to now the effect of this limitation on neural network training algorithms has been studied only for standard models like SOM or BP. This paper presents the results of an experimental study in which the precision requirements ...



Journal:
  • Neurocomputing

Volume 50  Issue 

Pages  -

Publication date 2003